Web Survey Bibliography
There is considerable debate about whether questions should be presented in a grid or as a single item per screen. Operationally, grids take less time for respondents to complete, so their use should reduce response burden, although recent research suggests that respondents prefer a single item per screen. From a measurement point of view, grids pose several problems: higher item nonresponse, more item non-differentiation, and sometimes higher measurement error.
In this experiment, we test the Vitality (4 items) and Mental Health (5 items) scales of the SF-36v2® Health Survey. The SF-36v2 asks 36 questions to measure functional health and well-being from the patient's point of view. It is called a generic health survey because it can be used across age (18 and older), disease, and treatment groups, unlike a disease-specific health survey, which focuses on a particular condition or disease. Two of the four Vitality items and two of the five Mental Health items are reverse-scored.
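Reverse scoring flips a response so that higher values always point in the same substantive direction before items are summed. The sketch below illustrates the idea assuming a 1-to-5 response coding; the SF-36v2's actual item codings and licensed scoring algorithm are not reproduced here.

```python
# Hypothetical illustration of reverse scoring, assuming a 1-5 response
# coding. This is NOT the SF-36v2's licensed scoring procedure.

def reverse_score(raw, lo=1, hi=5):
    """Flip a raw response on a lo..hi scale so higher always means the same direction."""
    return lo + hi - raw

# A respondent answering 2 on a reversed item contributes the same amount
# to a scale score as answering 4 on a non-reversed item.
print(reverse_score(2))  # -> 4
print(reverse_score(5))  # -> 1
```

Respondents who overlook the reversal (for example, by straight-lining down a grid) produce answers that disagree with their non-reversed items after recoding, which is what the consistency checks below are designed to detect.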
A sample of 2,500 KnowledgePanel® respondents was randomly assigned to one of five experimental conditions: Group 1, standard grid; Group 2, shaded grid; Group 3, one item per screen with horizontal response options; Group 4, one item per screen with vertical response options; Group 5, one item per screen with vertical shaded response options. Approximately 360 respondents completed the survey per condition, for a completion rate of 73.4%. The survey was optimized for screens with a minimum resolution of 800 by 600 pixels. During the study we recorded each respondent's browser type, which allowed us to exclude cases in which the survey was taken on MSN TV or on an iPhone/PDA, since those devices could not properly display the grid items. The final sample used for the analysis, after exclusions, was 1,419 cases, for an average group size of about 280.
We hypothesized that items presented in a grid would lead to more measurement error, indicated by a higher rate of inconsistencies in self-reports to grid questions and a lower rate of inconsistencies in self-reports to single-item questions. We speculated that presenting each item on its own screen allows the respondent to bring more cognitive focus to the question and therefore answer more consistently. In contrast, when items appear in a grid, it is easier for the respondent to become confused, especially when some of the items are reverse-scored. We computed an index of consistency by correlating the total sum of scores for the reversed items with the total sum of scores for the non-reversed items; if respondents answer consistently, this correlation should be higher. We also calculated Cronbach's alpha to measure internal consistency of answers within each of the five experimental conditions.
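The two consistency measures described above can be sketched as follows. The data here are simulated (300 respondents, five items on an assumed 1-5 coding, with the first two items treated as the reversed set); the item counts, coding, and responses are illustrative assumptions, not the study's data.

```python
# Sketch of the two consistency measures: Cronbach's alpha and the
# reversed/non-reversed sum correlation. Data are simulated; the 1-5
# coding and the split of 2 reversed vs. 3 non-reversed items are
# illustrative assumptions.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scored responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Simulate items driven by one latent trait so answers cohere.
latent = rng.normal(size=(300, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(300, 5))), 1, 5)

alpha = cronbach_alpha(scores)

# Consistency index: after reverse-scored items have been recoded, the sum
# of the reversed set should correlate positively with the sum of the
# non-reversed set for consistent respondents.
rev_sum = scores[:, :2].sum(axis=1)
nonrev_sum = scores[:, 2:].sum(axis=1)
r = np.corrcoef(rev_sum, nonrev_sum)[0, 1]
```

Under the study's hypothesis, both quantities would come out lower in the grid conditions than in the single-item conditions.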
The direction of the findings was consistent with our hypotheses -- lower alpha for the grid presentation and a higher consistency correlation for the single-item presentation -- although the differences among groups did not reach statistical significance.